Ordinary Differential Equations

2014-04-26

  • 1 First-order ODE
    • 1.1 Separable equation
    • 1.2 Exact equation
    • 1.3 Inexact equation - integrating factor
  • 2 Higher-order linear ODE with constant coefficients
    • 2.1 Wronskian
    • 2.2 Find complementary solution
    • 2.3 Find particular solution
    • 2.4 Laplace transform method
  • 3 Higher-order linear ODE with variable coefficients
    • 3.1 Second-order linear ODE
    • 3.2 Series solution of linear ODE
      • 3.2.1 Basics on series solution of second-order linear ODE
      • 3.2.2 Series solutions of second-order linear ODE
      • 3.2.3 On the indicial equation
      • 3.2.4 On the methods to find the second solution
  • 4 Nonlinear ODE
  • 5 Miscellaneous

1 First-order ODE

Solutions fall into two categories: closed form and infinite series.

1.1 Separable equation

The general form is

\[ \begin{equation} \frac{dy}{dx} = f(x) g(y) \end{equation} \]

Then rearrange the equation as

\[ \begin{equation} \int \frac{dy}{g(y)} = \int f(x) dx \end{equation} \]
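
As a minimal illustration (my own sketch, not part of the original notes), here is how sympy solves the separable equation \(dy/dx = x y\):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# dy/dx = x * y(x) is separable: dy/y = x dx
ode = sp.Eq(y(x).diff(x), x * y(x))
print(sp.dsolve(ode, y(x)))   # y(x) = C1*exp(x**2/2)
```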

1.2 Exact equation

Exact equation is of this form

\[ \begin{equation} A(x, y) dx + B(x, y) dy = 0, \quad \frac{\partial A}{\partial y} = \frac{\partial B}{\partial x} \end{equation} \]

\(A(x, y) dx + B(x, y) dy\) is an exact differential. We restate it as

\[ \begin{equation} A(x, y) dx + B(x, y) dy = dU = \frac{\partial U}{\partial x} dx + \frac{\partial U}{\partial y} dy \end{equation} \]

The above statement requires

\[ \begin{equation} \frac{\partial A}{\partial y} = \frac{\partial B}{\partial x} \end{equation} \]

By applying the following equations,

\[ \begin{align} \frac{\partial U}{\partial x} &= A(x, y) \\ \frac{\partial U}{\partial y} &= B(x, y) \\ \end{align} \]

We obtain \(U(x, y)\). So, the exact equation can be integrated directly.1

\[ \begin{equation} \int dU = C \end{equation} \]

For example, solve the following ODE.

\[ (3x + y) dx + x dy = 0 \]

Take \(A(x, y) = 3x + y\) and \(B(x, y) = x\). Then we get

\[ \begin{align} \frac{\partial U}{\partial x} &= 3x + y \\ \frac{\partial U}{\partial y} &= x \\ \end{align} \]

Integrating the second equation with respect to \(y\) gives \(U(x, y) = xy + f(x)\). Substituting this into the first equation gives \(y + f^{\prime}(x) = y + 3x\), so \(f^{\prime}(x) = 3x\) and \(U(x, y) = xy + \frac{3}{2} x^{2} + C_{0}\). The integration \(\int dU = C_{1}\) is equivalent to \(U(x, y) = C_{1}\), so the final solution is \(xy + \frac{3}{2} x^{2} = C\).
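
A quick symbolic check of this example (my own addition, using sympy):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# (3x + y) dx + x dy = 0, rewritten as an ODE for y(x)
ode = sp.Eq((3*x + y(x)) + x*y(x).diff(x), 0)
print(sp.dsolve(ode, y(x)))   # equivalent to x*y + 3*x**2/2 = C1
```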

1.3 Inexact equation - integrating factor

Inexact equation is of this form

\[ \begin{equation} A(x, y) dx + B(x, y) dy = 0, \quad \frac{\partial A}{\partial y} \neq \frac{\partial B}{\partial x} \end{equation} \]

Although the above equation is inexact, it can be made exact by multiplying by an integrating factor \(\mu\):

\[ \begin{equation} \frac{\partial (\mu A)}{\partial y} = \frac{\partial (\mu B)}{\partial x} \end{equation} \]

In general the integrating factor \(\mu\) may depend on both \(x\) and \(y\), but the method below assumes it is a function of \(x\) alone or of \(y\) alone. Suppose \(\mu = \mu(x)\); then the above condition reduces to

\[ \begin{equation} \mu \frac{\partial A}{\partial y} = \mu \frac{\partial B}{\partial x} + B \frac{d \mu}{dx} \end{equation} \]

Then rearrange it as

\[ \begin{equation} \frac{d \mu}{\mu} = \Big ( \frac{\partial A}{\partial y} - \frac{\partial B}{\partial x} \Big ) \frac{dx}{B} \end{equation} \]

Here the additional assumption is that \(\frac{1}{B} \Big ( \frac{\partial A}{\partial y} - \frac{\partial B}{\partial x} \Big )\) is a function of \(x\) alone. Integrating the above equation then gives the integrating factor; substituting it into the original equation yields an exact equation.
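
As an illustrative sketch (my own example, not from the notes): for \((3xy + y^{2}) dx + (x^{2} + xy) dy = 0\) the ratio \(\frac{1}{B}(A_{y} - B_{x})\) equals \(1/x\), so the integrating factor is \(\mu = x\).

```python
import sympy as sp

x, y = sp.symbols('x y')

A = 3*x*y + y**2
B = x**2 + x*y

# (A_y - B_x)/B depends on x alone, so an integrating factor mu(x) exists.
ratio = sp.simplify((sp.diff(A, y) - sp.diff(B, x)) / B)   # 1/x
mu = sp.simplify(sp.exp(sp.integrate(ratio, x)))           # mu = x
print(ratio, mu)

# After multiplying by mu, the equation is exact:
print(sp.simplify(sp.diff(mu*A, y) - sp.diff(mu*B, x)))    # 0
```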

2 Higher-order linear ODE with constant coefficients

The general form is

\[ \begin{equation} a_{n} \frac{d^{n}y}{dx^{n}} + a_{n-1} \frac{d^{n-1}y}{dx^{n-1}} + ... + a_{1} \frac{dy}{dx} + a_{0} y = f(x) \end{equation} \]

Find the complementary solution first, then a particular solution, and add them together.

2.1 Wronskian

Whether a set of solutions of a higher-order linear ODE is linearly independent is determined by the Wronskian.

2.2 Find complementary solution

The method is to assume \(y = A e^{\lambda x}\) and substitute it into the homogeneous equation. Dividing out the exponential factor leaves the auxiliary equation.

\[ \begin{equation} a_{n} \lambda^{n} + a_{n-1} \lambda^{n-1} + ... + a_{1} \lambda + a_{0} = 0 \end{equation} \]

There are three main cases for the roots of the auxiliary equation:

  • All roots are real and distinct
  • Some roots are complex
  • Some roots are repeated
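
As a minimal sketch (my own example): for \(y^{\prime \prime} - 3 y^{\prime} + 2 y = 0\) the auxiliary equation \(\lambda^{2} - 3 \lambda + 2 = 0\) has distinct real roots \(1, 2\), so the complementary solution is \(C_{1} e^{x} + C_{2} e^{2x}\).

```python
import sympy as sp

x = sp.symbols('x')
lam = sp.Symbol('lambda')
y = sp.Function('y')

# roots of the auxiliary equation for y'' - 3y' + 2y = 0
print(sp.solve(lam**2 - 3*lam + 2, lam))   # [1, 2]

# complementary solution built from those roots
sol = sp.dsolve(y(x).diff(x, 2) - 3*y(x).diff(x) + 2*y(x), y(x))
print(sol)   # y(x) = C1*exp(x) + C2*exp(2*x) (sympy may group the constants differently)
```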

2.3 Find particular solution

There is no generally applicable method to find a particular solution. However, there are some standard situations where we can employ the method of undetermined coefficients.

We can also use the method of variation of parameters to find a particular solution of an equation of the form

\[ \begin{equation} y^{\prime \prime} + P(x) y^{\prime} + Q(x) y = f(x) \end{equation} \]
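
For reference (my addition, quoting the standard result rather than deriving it): if \(y_{1}, y_{2}\) are linearly independent complementary solutions with Wronskian \(W = y_{1} y^{\prime}_{2} - y_{2} y^{\prime}_{1}\), variation of parameters gives the particular solution

\[ \begin{equation} y_{p}(x) = -y_{1}(x) \int \frac{y_{2}(x) f(x)}{W(x)} dx + y_{2}(x) \int \frac{y_{1}(x) f(x)}{W(x)} dx \end{equation} \]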

2.4 Laplace transform method

Taking the Laplace transform turns the original ODE into a purely algebraic equation. Once the solution of the algebraic equation is obtained, taking the inverse Laplace transform gives the solution of the original ODE. Specifically, two formulas from Laplace theory are used to transform the ODE; the right-hand side of the equation is also Laplace-transformed, which is usually done by referring to a standard table. The formulas are as follows

\[ \begin{align} \bar{f}(s) &= \int^{\infty}_{0} e^{-sx} f(x) dx \\ \overline{f^{(n)}}(s) &= s^{n} \bar{f}(s) - s^{n-1} f(0) - s^{n-2} f^{\prime}(0) - ... - s f^{(n-2)}(0) - f^{(n-1)}(0) \\ \end{align} \]
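
A minimal sketch of the idea (my own example): for \(y^{\prime \prime} + y = 0\) with \(y(0) = 0, y^{\prime}(0) = 1\), the transformed equation is purely algebraic in \(\bar{y}(s)\), and inverting recovers \(\sin x\).

```python
import sympy as sp

x, s = sp.symbols('x s', positive=True)
Y = sp.symbols('Y')   # the Laplace transform of y

# y'' + y = 0 with y(0) = 0, y'(0) = 1 transforms to (s**2*Y - 1) + Y = 0
Ysol = sp.solve(sp.Eq(s**2*Y - 1 + Y, 0), Y)[0]   # 1/(s**2 + 1)
print(Ysol)
print(sp.inverse_laplace_transform(Ysol, s, x))   # sin(x) (possibly times Heaviside(x))
```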

3 Higher-order linear ODE with variable coefficients

There is no generally applicable method for solving this type of ODE. Nevertheless, there are some cases in which a solution can be found.

3.1 Second-order linear ODE

Standard form

\[ \begin{equation} y^{\prime \prime} + P(x) y^{\prime} + Q(x) y = 0 \end{equation} \]

If we already have one solution \(y_{1}(x)\), we can set \(y = u(x) y_{1}(x)\), so that \(y^{\prime} = u^{\prime} y_{1} + u y^{\prime}_{1}\) and \(y^{\prime \prime} = u^{\prime \prime} y_{1} + 2 u^{\prime} y^{\prime}_{1} + u y^{\prime \prime}_{1}\). Substituting into the original equation gives

\[ \begin{equation} y^{\prime \prime} + P y^{\prime} + Q y = u ( y^{\prime \prime}_{1} + P y^{\prime}_{1} + Q y_{1} ) + u^{\prime} ( 2 y^{\prime}_{1} + P y_{1} ) + u^{\prime \prime} y_{1} = 0 \end{equation} \]

Since \(y_{1}\) satisfies the homogeneous equation, the first bracket vanishes. Defining \(w = u^{\prime}\), we are left with

\[ \begin{equation} y_{1} w^{\prime} + ( 2 y^{\prime}_{1} + P y_{1} ) w = 0 \end{equation} \]

Separate variables

\[ \begin{align} \frac{dw}{w} + \frac{2 y^{\prime}_{1}}{y_{1}} dx + P dx &= 0 \\ \int \frac{1}{w} dw + \int \frac{2 y^{\prime}_{1}}{y_{1}} dx + \int P dx &= 0 \end{align} \]
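
Carrying out the integration (a completion step, added here): \(\ln w + 2 \ln y_{1} + \int P dx = \text{const}\), so

\[ \begin{align} w = u^{\prime} &= \frac{C}{y^{2}_{1}} \exp \Big( - \int P dx \Big) \\ y_{2}(x) &= y_{1}(x) \int \frac{\exp \big( - \int^{x} P(u) du \big)}{y^{2}_{1}(x)} dx \\ \end{align} \]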

3.2 Series solution of linear ODE

A method for obtaining solutions to linear ODEs in the form of convergent series.

Indeed, some functions defined by convergent series are so common that they are given special names, such as \(\sin x\), \(\cos x\) and \(\exp x\).

3.2.1 Basics on series solution of second-order linear ODE

  1. The general form

    \[ \begin{align} y^{\prime \prime} + p(x) y^{\prime} + q(x) y = 0 \end{align} \]

    The general form of solutions is as follows

    \[ \begin{align} y(x) = c_{1} y_{1}(x) + c_{2} y_{2}(x) \end{align} \]

    To determine whether \(y_{1}\) and \(y_{2}\) are linearly independent, use the Wronskian.

    \[ \begin{align} W(x) = \begin{vmatrix} y_{1} & y_{2} \\ y^{\prime}_{1} & y^{\prime}_{2} \\ \end{vmatrix} = y_{1} y^{\prime}_{2} - y_{2} y^{\prime}_{1} \end{align} \]

    If the Wronskian is nonzero at some point of a given interval, then \(y_{1}, y_{2}\) are linearly independent on that interval. (A quick symbolic check of the Wronskian is sketched after this list.)

    Another way to evaluate the Wronskian:

    \[ \begin{align} W^{\prime} = y_{1} y^{\prime \prime}_{2} - y_{2} y^{\prime \prime}_{1} = - p W \end{align} \]

    Integrating, we find

    \[ \begin{align} W(x) = C \exp \Big( - \int^{x} p(u) du \Big) \end{align} \]

  2. Points classification

    If, at some point \(z = z_{0}\), the coefficients \(p(z), q(z)\) are finite and can be expressed as convergent power series, then \(p(z), q(z)\) are said to be analytic at \(z = z_{0}\), and \(z = z_{0}\) is an ordinary point. If \(p(z)\), \(q(z)\), or both diverge at \(z = z_{0}\), the point is a singular point.

    If the ODE is singular at \(z = z_{0}\), it may still possess a non-singular (finite) solution there. The necessary and sufficient condition for such a solution to exist is that \((z - z_{0}) p(z)\) and \((z - z_{0})^{2} q(z)\) are both analytic at \(z = z_{0}\). Under this condition the point is called a regular singular point; otherwise it is an irregular or essential singularity.

    Sometimes we want to investigate the behaviour as \(|z| \rightarrow \infty\); substitute \(w = 1/z\) and study the point \(w = 0\).
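
A quick symbolic check of the Wronskian mentioned above (my own sketch, using sympy's wronskian helper):

```python
import sympy as sp
from sympy import wronskian

x = sp.symbols('x')

# For y'' + y = 0 the solutions are sin(x) and cos(x); here p(x) = 0,
# so the formula W = C*exp(-int p dx) predicts a constant Wronskian.
W = wronskian([sp.sin(x), sp.cos(x)], x)
print(sp.simplify(W))   # -1
```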

3.2.2 Series solutions of second-order linear ODE

  1. Ordinary point

    Seek a power series solution of the form

    \[ \begin{align} y(z) = \sum^{\infty}_{n=0} a_{n} z^{n} \end{align} \]

    The radius of convergence \(R\) is the distance from \(z = 0\) to the nearest singular point.

    Substituting a power series of this form into the ODE gives a recurrence relation, from which all the coefficients are determined. (A short numerical sketch follows this list.)

  2. Regular singular point

    Suppose \(z = 0\) is a regular singular point of the equation

    \[ \begin{align} y^{\prime \prime} + p(z) y^{\prime} + q(z) y = 0 \end{align} \]

    Fuchs's theorem shows that there exists at least one solution of the form

    \[ \begin{align} y = z^{\sigma} \sum^{\infty}_{n=0} a_{n} z^{n} \end{align} \]

    \(\sigma\) is a real or complex number and \(a_{0} \neq 0\).

    Such a series is called a Frobenius series.

    The radius of convergence \(R\) is the distance from \(z = 0\) to the nearest singular point.

    Writing the ODE as \(y^{\prime \prime} + \frac{s(z)}{z} y^{\prime} + \frac{t(z)}{z^{2}} y = 0\), where \(s(z) = z p(z)\) and \(t(z) = z^{2} q(z)\) are analytic at \(z = 0\), and substituting the Frobenius series, we obtain

    \[ \begin{align} \sum^{\infty}_{n=0} \Big [ (n + \sigma)(n + \sigma - 1) + s(z) (n + \sigma) + t(z) \Big ] a_{n} z^{n} = 0 \end{align} \]

    The solution is expanded about \(z = 0\); setting \(z = 0\) in the lowest-order (\(n = 0\)) term and using \(a_{0} \neq 0\) gives the indicial equation

    \[ \begin{align} \sigma (\sigma - 1) + s(0) \sigma + t(0) = 0 \end{align} \]
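
The numerical sketch referenced in the ordinary-point case above (my own example, not from the notes): for \(y^{\prime \prime} + y = 0\) about the ordinary point \(z = 0\), the recurrence relation is \(a_{n+2} = -a_{n} / \big( (n+1)(n+2) \big)\).

```python
import sympy as sp

z = sp.symbols('z')
N = 8   # number of series terms to keep

# recurrence a_{n+2} = -a_n / ((n+1)(n+2)) for y'' + y = 0 about z = 0
a = [sp.Integer(1), sp.Integer(0)]   # a0 = 1, a1 = 0 selects the cos-like solution
for n in range(N - 2):
    a.append(-a[n] / ((n + 1) * (n + 2)))

approx = sum(a[n] * z**n for n in range(N))
print(approx)                          # 1 - z**2/2 + z**4/24 - z**6/720
print(sp.series(sp.cos(z), z, 0, N))   # agrees with the Taylor series of cos(z)
```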

3.2.3 On the indicial equation

  • \(\sigma_{1} - \sigma_{2}\) is not an integer

    There are two linearly independent solutions of the ODE

    \[ \begin{align} y_{1}(z) &= z^{\sigma_{1}} \sum^{\infty}_{n=0} a_{n} z^{n} \\ y_{2}(z) &= z^{\sigma_{2}} \sum^{\infty}_{n=0} b_{n} z^{n} \\ \end{align} \]

    The linear independence of the solutions can be verified with the Wronskian.

  • \(\sigma_{1} = \sigma_{2}\)

    Only one solution in the form of a Frobenius series can be found; another method must be used to find the second solution.

  • \(\sigma_{1} - \sigma_{2}\) is a nonzero integer

    The root with the larger real part always leads to a solution. The root with the smaller real part may or may not lead to a second linearly independent solution; if it does not, another method must be used to find the second solution.

3.2.4 On the methods to find the second solution
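
One such method follows directly from the reduction-of-order construction in section 3.1: given the Frobenius solution \(y_{1}(z)\), a second solution is \(y_{2}(z) = y_{1}(z) \int \frac{\exp \big( - \int^{z} p(u) du \big)}{y^{2}_{1}(z)} dz\).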

4 Nonlinear ODE

There is no generally applicable method for solving this type of ODE.

5 Miscellaneous

  1. Homogeneous boundary conditions in a BVP mean that the prescribed boundary values are \(0\).

  2. In a BVP with homogeneous boundary conditions, the trivial solution \(y = 0\) always exists. What has practical value, however, is a non-trivial solution, which exists only for particular eigenvalues; determining these eigenvalues is the eigenvalue problem. (A standard example follows this list.)
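
As a standard illustration (my addition): for \(y^{\prime \prime} + \lambda y = 0\) with \(y(0) = y(L) = 0\), non-trivial solutions exist only for the eigenvalues

\[ \begin{align} \lambda_{n} = \Big( \frac{n \pi}{L} \Big)^{2}, \qquad y_{n}(x) = \sin \frac{n \pi x}{L}, \qquad n = 1, 2, 3, ... \end{align} \]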


  1. I cannot understand the next step as it is presented in ODE textbooks; to me, direct integration of this equation seems apparent.↩︎

Created on 2014-04-26 with pandoc